Analyzing and interpreting cone-beam computed tomography (CBCT) images is a complicated and often time-consuming process. In this study, we present two multi-channel deep learning (DL) architectures, "Ensemble" and "Synchronized multi-channel", to automatically identify and classify skeletal malocclusions from 3D CBCT craniofacial images. These multi-channel models combine three individual single-channel base models using a voting scheme and a two-step learning process, respectively, to simultaneously extract and learn visual representations from three directional views of 2D images generated from a single 3D CBCT image. We also employ a visualization method called "Class-selective Relevance Mapping" (CRM) to explain the learned behavior of our DL models by localizing and highlighting the discriminative area within an input image. Our multi-channel models achieve significantly better overall performance (accuracy exceeding 93%) than single-channel DL models that take only one directional view of the 2D projected images as input. In addition, CRM visually demonstrates that a DL model based on the sagittal-left view of 2D images outperforms those based on the other directional 2D images.

Clinical Relevance- The proposed method aims to assist orthodontists in determining the best treatment path for the patient, be it orthodontic treatment, surgical treatment, or a combination of both.

Intracranial hemorrhage (ICH) is a life-threatening condition associated with stroke, trauma, aneurysm, vascular malformations, high blood pressure, illicit drug use, and blood clotting disorders. In this study, we present the feasibility of automatic identification and classification of ICH in head CT images using a deep learning technique. The ICH subtypes classified were intraparenchymal, intraventricular, subarachnoid, subdural, and epidural.
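The voting scheme of the "Ensemble" CBCT model described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the abstract only states that three single-channel models are combined by voting, so the use of soft (probability-averaging) voting, the three view names, and the example probabilities are all assumptions.

```python
# Minimal sketch of combining three single-channel view models by voting.
# Soft voting (averaging per-class probabilities) is assumed; the abstract
# does not specify whether hard or soft voting is used.

def soft_vote(prob_vectors):
    """Average per-class probabilities from several models, return the
    index of the class with the highest mean probability."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Hypothetical outputs of the three directional-view models for three
# skeletal malocclusion classes (illustrative values only):
view_a = [0.6, 0.3, 0.1]
view_b = [0.2, 0.5, 0.3]
view_c = [0.5, 0.4, 0.1]
predicted_class = soft_vote([view_a, view_b, view_c])
```

A hard-voting variant would instead take the majority over each model's argmax; soft voting is shown here because it uses the full probability vectors that classification networks typically output.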
We first performed windowing to generate three different images (brain window, bone window, and subdural window), and trained a CNN-LSTM model on 4,516,842 head CT images. We used the Xception model for the deep CNN, with 64 nodes and 32 timesteps for the LSTM. For the performance evaluation, we tested 727,392 head CT images and obtained a weighted multi-label logarithmic loss of 0.07528. We believe that our proposed method enhances the accuracy of ICH identification and classification and can assist radiologists in the interpretation of head CT images, particularly for brain-related quantitative analysis.

Many ocular diseases are associated with choroidal changes. Therefore, it is crucial to be able to segment the choroid to study its properties. Previous methods for choroidal segmentation have focused on single cross-sectional scans; volumetric choroidal segmentation has yet to be widely reported. In this paper, we propose a sequential segmentation approach using a variation of U-Net with a bidirectional C-LSTM (Convolutional Long Short-Term Memory) module in the bottleneck region. The model is evaluated on volumetric scans from 40 high-myopia subjects, obtained using SS-OCT (Swept-Source Optical Coherence Tomography). A comparison with other U-Net-based variants is also presented. The results demonstrate that volumetric segmentation of the choroid can be achieved with an IoU (Intersection over Union) of 0.92.

Clinical relevance- This deep learning approach can automatically segment the choroidal volume, enabling better evaluation and monitoring of ocular diseases.

Pulmonary fissure segmentation is important for localizing lung lesions, including nodules, within their respective lobar territories. This can be very useful for diagnosis as well as treatment planning.
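The intensity-windowing step of the ICH pipeline above (brain, subdural, and bone windows) can be sketched as below. The window level/width values used here are common defaults for head CT, not values stated in the abstract, and the pure-Python form is only illustrative.

```python
# Sketch of CT intensity windowing used to derive multiple input channels.
# Window level/width values are typical head-CT defaults (an assumption;
# the paper does not list its exact settings).

def apply_window(hu_values, level, width):
    """Clip Hounsfield units to [level - width/2, level + width/2]
    and rescale the result to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    out = []
    for v in hu_values:
        v = min(max(v, lo), hi)
        out.append((v - lo) / (hi - lo))
    return out

pixels = [-1000, 0, 40, 80, 500]           # raw HU values (air .. bone)
brain    = apply_window(pixels, 40, 80)    # soft-tissue contrast
subdural = apply_window(pixels, 80, 200)   # emphasizes thin hemorrhages
bone     = apply_window(pixels, 600, 2800) # skull detail
```

Stacking the three windowed images as channels gives the network several contrast settings of the same slice at once, mimicking how radiologists re-window a scan during review.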
In this paper, we propose a novel coarse-to-fine fissure segmentation approach, a Multi-View Deep Learning driven Iterative WaterShed algorithm (MDL-IWS). Coarse fissure segmentation obtained from multi-view deep learning yields an incomplete fissure volume of interest (VOI) with additional false positives. An iterative watershed algorithm (IWS) is presented to achieve fine segmentation of the fissure surfaces. As part of the IWS algorithm, surface fitting is used to generate a more accurate fissure VOI with a substantial reduction in false positives. Additionally, a weight map is used to reduce watershed over-segmentation in subsequent iterations. Experiments on the publicly available LOLA11 dataset clearly show that our method outperforms several state-of-the-art competitors.

In endoscopic surgery, it is necessary to understand the three-dimensional structure of the target region to improve safety. For organs that do not deform much during surgery, preoperative computed tomography (CT) images can be used to understand their three-dimensional structure; however, deformation estimation is necessary for organs that deform substantially. Although the intraoperative deformation estimation of organs has been widely studied, two-dimensional organ region segmentations from camera images are needed to perform this estimation. In this paper, we propose a region segmentation method using U-Net for the lung, an organ that deforms substantially during surgery. Because the segmentation accuracy for smokers' lungs is lower than that for non-smokers' lungs, we improved it by translating the texture of the lung surface using a CycleGAN.

Multiphase computed tomographic angiography (CTA) has been demonstrated to be a reliable imaging tool for evaluating cerebral collateral circulation, which can be used to select acute ischemic stroke patients for recanalization therapy.
We propose using bone subtraction techniques to visualize multiphase CTA so that clinicians can make fast and consistent decisions in the imaging triage of acute stroke patients. A total of 40 multiphase brain CTA datasets were collected and processed by two bone subtraction methods. The reference method used pre-contrast (phase 0) scans to create ground-truth bone masks by thresholding. The tested method used only contrast-enhanced (phases 1, 2, and 3) scans to extract bone masks with two versions (U-Net and atrous) of 3D multichannel convolutional neural networks (CNNs) in a supervised deep learning paradigm for semantic segmentation. Half (n = 20) of the datasets were used to train and half (n = 20) to test the conventional 3D U-Net and a patch-based 3D multichannel atrous CNN.
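The reference bone-mask generation described above (thresholding the pre-contrast phase 0 scan) can be sketched as follows. The 300 HU cutoff is a typical bone threshold, not a value stated in the abstract, and the toy 2x2 slice is purely illustrative.

```python
# Minimal sketch of the reference method: create a binary bone mask from
# a pre-contrast (phase 0) scan by thresholding in Hounsfield units.
# The 300 HU threshold is an assumed typical value for cortical bone.

def bone_mask(hu_slice, threshold=300):
    """Return a binary mask marking voxels at or above the bone threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in hu_slice]

phase0 = [[-5, 320], [1200, 40]]  # toy 2x2 slice of HU values
mask = bone_mask(phase0)          # [[0, 1], [1, 0]]
```

Such thresholded masks serve as the ground truth against which the CNN-predicted masks from the contrast-enhanced phases are evaluated.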